Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen Feature Maps

Author

  • Jens Christian Claussen
Abstract

Self-Organizing Maps are models for unsupervised representation formation of cortical receptor fields by stimulus-driven self-organization in laterally coupled winner-take-all feedforward structures. This paper discusses modifications of the original Kohonen model, motivated by a potential function, with respect to their ability to set up a neural mapping of maximal mutual information. Enhancing the winner update, instead of relaxing it, results in an algorithm that generates an infomax map corresponding to a magnification exponent of one. Although there may be more than one algorithm showing the same magnification exponent, the magnification law is an experimentally accessible quantity and is therefore suitable for the quantitative description of neural optimization principles.

Self-Organizing Maps are one of the most successful paradigms in the mathematical modelling of particular aspects of brain function, even though a quantitative understanding of the neurobiological learning dynamics and its implications for the mathematical process of structure formation is still lacking. A biological discrimination between models may be difficult, and it is not completely clear [1] which optimization goals are dominant in the biological development of, e.g., skin, auditory, olfactory or retina receptor fields. All of them roughly show a self-organizing ordering, which can most simply be described by the Self-Organizing Feature Map [2], defined as follows: every stimulus v in input space (receptor field) is assigned to a so-called winner (or center of excitation) s where the distance |v − w_s| to the stimulus is minimal. According to Kohonen, all weight vectors are updated by

δw_r = η g_rs · (v − w_r).   (1)

This can be interpreted as a Hebbian learning rule; η is the learning rate, and g_rs determines the (pre-defined) topology of the neural layer. While Kohonen chose g to be 1 within a fixed neighbourhood and 0 elsewhere, a Gaussian kernel (with a width decreasing in time) is more common.
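The update rule (1) can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: it assumes a one-dimensional chain of N neurons mapping the unit interval, a Gaussian neighborhood g_rs, and serial (stochastic) updates with fixed η and σ (in practice both are decreased over time).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                # neurons on a 1D chain lattice
w = np.sort(rng.random(N))            # weight vectors w_r, ordered initialization
eta = 0.1                             # learning rate (eta)
sigma = 2.0                           # neighborhood width (decreased over time in practice)
positions = np.arange(N)              # fixed lattice positions r

def som_step(w, v, eta, sigma):
    """One serial Kohonen update, Eq. (1): delta w_r = eta * g_rs * (v - w_r)."""
    s = np.argmin(np.abs(v - w))                          # winner s: minimal |v - w_s|
    g = np.exp(-(positions - s) ** 2 / (2 * sigma ** 2))  # Gaussian neighborhood g_rs
    return w + eta * g * (v - w)

# train on uniformly distributed stimuli from [0, 1]
for _ in range(5000):
    w = som_step(w, rng.random(), eta, sigma)
```

Since each update moves w_r by a convex combination toward the stimulus, the weights remain in the stimulus interval and spread out over it.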
The Self-Organizing Map concept can be used with regular lattices of any dimension (although 1, 2 or 3 dimensions are preferred for easy visualization), with an irregular lattice, with none (Vector Quantization) [3], or with a neural gas [4], where the coefficients g are determined by the rank of the distance in input space. In all cases, learning can be implemented by serial (stochastic), batch, or parallel updates.

⋆ Published in: V. Capasso (Ed.): Mathematical Modeling & Computing in Biology and Medicine, p. 17–22, Miriam Series, Progetto Leonardo, Bologna (2003).
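The neural-gas variant mentioned above replaces the lattice neighborhood with coefficients that depend on each unit's distance rank. A hedged sketch, assuming an exponential rank decay exp(−k_r/λ) (a common choice; the decay profile and parameter names here are illustrative, not taken from the paper):

```python
import numpy as np

def neural_gas_step(w, v, eta, lam):
    """One neural-gas update: the coefficient g for each unit depends on
    the rank of its distance to the stimulus v in input space."""
    d = np.linalg.norm(w - v, axis=1)        # distances |v - w_r|
    ranks = np.argsort(np.argsort(d))        # rank k_r of each unit (0 = closest)
    g = np.exp(-ranks / lam)                 # rank-based coefficients g
    return w + eta * g[:, None] * (v - w)    # same Hebbian form as Eq. (1)

rng = np.random.default_rng(1)
w = rng.random((10, 2))                      # 10 units, no pre-defined lattice
v = np.array([0.5, 0.5])
w_new = neural_gas_step(w, v, eta=0.1, lam=1.0)
```

Unlike the Kohonen map, no lattice topology is imposed: the closest unit receives the full update and the others follow with exponentially decaying strength.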


Similar articles

Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner

The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law, which can be obtained analytically in the one-dimensional case. The Winner-Enhancing case makes it possible to achieve a magnification exponent of one and therefore provides an optimal mapping in the sense of information theory....


Generalized Winner-Relaxing Kohonen Self-Organizing Feature Maps

We calculate analytically the magnification behaviour of a generalized family of self-organizing feature maps inspired by a variant introduced by Kohonen in 1991, denoted here as the Winner-Relaxing Kohonen algorithm, which is shown here to have a magnification exponent of 4/7. Motivated by the observation that a modification of the learning rule for the winner neuron influences the magnification l...


Winner-Relaxing Self-Organizing Maps

A new family of self-organizing maps, the Winner-Relaxing Kohonen algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows the magnification to be steered over the wide range from exponent 1/2 to 1 in the one-dimensional ...


Magnification Control in Winner Relaxing Neural Gas

We transfer the idea of winner relaxing learning from the self-organizing map to the neural gas to enable magnification control independently of the shape of the data distribution.


Towards an Information Density Measure for Neural Feature Maps

Many neural models have been suggested for the development of feature maps in cortical areas. Undoubtedly the most popular model is the Kohonen self-organizing map (SOM). Once the map has been learned, this network uses a competitive winner-take-all (WTA) approach to choose a single 'best' output neuron on a (typically) 2D grid for each presented input pattern. Cortical maps in biological organis...



Journal:
  • CoRR

Volume abs/cs/0701003  Issue 

Pages  -

Published 2006